52 research outputs found

    Balanced Coarsening for Multilevel Hypergraph Partitioning via Wasserstein Discrepancy

    We propose a balanced coarsening scheme for multilevel hypergraph partitioning, together with an initial partitioning algorithm designed to improve the quality of k-way hypergraph partitioning. By assigning vertex weights through the LPT algorithm, we generate a prior hypergraph under a relaxed balance constraint. Using this prior hypergraph, we define a Wasserstein discrepancy to coordinate the optimal transport of the coarsening process, and solve for the optimal transport matrix with the Sinkhorn algorithm. Our coarsening scheme fully accounts for minimizing the connectivity metric (the objective function). For the initial partitioning stage, we define a normalized cut function induced by the Fiedler vector, which is theoretically proved to be concave; a three-point algorithm is then designed to find the best cut under the balance constraint.
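    As a concrete illustration of the transport step, the sketch below runs plain entropy-regularized Sinkhorn iterations to produce a transport matrix between vertices and coarse clusters. The cost matrix, marginals, and regularization value here are illustrative assumptions, not the paper's actual construction of the Wasserstein discrepancy.

```python
import numpy as np

def sinkhorn(cost, row_marginal, col_marginal, reg=0.1, n_iters=200):
    """Entropy-regularized optimal transport via Sinkhorn iterations.

    cost: (n, m) cost matrix (in the paper's setting this would encode the
    discrepancy between fine vertices and coarse clusters).
    row_marginal / col_marginal: non-negative vectors summing to 1, e.g.
    normalized vertex weights and balanced cluster capacities.
    Returns the (n, m) transport matrix.
    """
    K = np.exp(-cost / reg)                    # Gibbs kernel
    u = np.ones(cost.shape[0])
    v = np.ones(cost.shape[1])
    for _ in range(n_iters):
        u = row_marginal / (K @ v)             # rescale to match row marginals
        v = col_marginal / (K.T @ u)           # rescale to match column marginals
    return u[:, None] * K * v[None, :]

# Illustrative usage: 6 vertices transported onto 3 balanced clusters.
rng = np.random.default_rng(0)
cost = rng.random((6, 3))
T = sinkhorn(cost, np.full(6, 1 / 6), np.full(3, 1 / 3))
print(T.sum(axis=1), T.sum(axis=0))            # approximately the prescribed marginals
```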

    Attention Guided 3D U-Net for KiTS19

    We use a two-stage 3D U-Net model to predict multi-channel segmentations from coarse to fine, with the second stage guided by the predictions of the first. The two stages are trained with different learning scopes and are assigned different learning missions. In stage 1, the coarse stage, data preprocessing first downscales the training data to a common shape so that the model can take in a whole image at once: all images and segmentations are downscaled to 128*128*32 (height*width*depth), and the segmentation files are transformed into 3-channel arrays whose channels represent, in order, the kidneys, the tumors, and the background (neither kidneys nor tumors). For training, we use a standard 3D U-Net followed by a softmax layer, applying data augmentation that includes normalization, random contrast, random flips, and random rotations. We feed in all 210 training cases and train the model to regress the multi-channel segmentations. The implementation uses PyTorch; the learning rate starts at 0.1 and is multiplied by 0.1 at 300,000 and 500,000 epochs, and binary cross-entropy is used as the loss function. For prediction, the 90 test images are preprocessed in the same way as the training images and fed to the trained model; the channel-wise predictions are scaled back to the original shape, and the first two channels, representing the kidney and tumor segmentations, are packaged as .nii.gz files.
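    The coarse-stage preprocessing can be pictured with the short sketch below, which downscales one CT case to the stated shape and builds the 3-channel (kidney, tumor, background) target. The label values, tensor layout, and resampling modes are assumptions made for illustration, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def preprocess_case(image, segmentation, target_shape=(32, 128, 128)):
    """Downscale one CT case for the coarse stage.

    image: float tensor of shape (D, H, W); segmentation: integer tensor of
    the same shape, assumed to use labels {0: background, 1: kidney, 2: tumor}.
    Returns the resized, normalized image and a 3-channel mask ordered
    (kidney, tumor, background), matching the abstract's description.
    """
    img = image.float()[None, None]                             # (1, 1, D, H, W)
    img = F.interpolate(img, size=target_shape, mode="trilinear",
                        align_corners=False)[0, 0]
    img = (img - img.mean()) / (img.std() + 1e-8)               # intensity normalization

    seg = segmentation.float()[None, None]
    seg = F.interpolate(seg, size=target_shape, mode="nearest").long()[0, 0]
    mask = torch.stack([seg == 1, seg == 2, seg == 0]).float()  # (3, D, H, W)
    return img, mask
```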

    Deep Clustering Survival Machines with Interpretable Expert Distributions

    Conventional survival analysis methods are typically ineffective at characterizing heterogeneity in the population, even though such information can assist predictive modeling. In this study, we propose a hybrid survival analysis method, referred to as deep clustering survival machines, that combines discriminative and generative mechanisms. As in mixture models, we assume that the timing information of survival data is generatively described by a mixture of a certain number of parametric distributions, i.e., expert distributions. We discriminatively learn instance-specific weights over the expert distributions from each instance's features, so that each instance's survival information is characterized by a weighted combination of the learned constant expert distributions. This method also facilitates interpretable subgrouping/clustering of all instances according to their associated expert distributions. Extensive experiments on both real and synthetic datasets demonstrate that the method obtains promising clustering results and competitive time-to-event prediction performance.
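    The generative/discriminative split described above can be sketched as a small mixture-of-experts model: a few parametric expert distributions with globally shared parameters, plus a gating network that maps each instance's features to softmax weights over them. The Weibull family, layer sizes, and method names below are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class ClusterSurvivalSketch(nn.Module):
    """Weighted mixture of constant expert distributions (illustrative)."""

    def __init__(self, n_features, n_experts=3):
        super().__init__()
        # Shared ("constant") Weibull expert parameters, one pair per expert.
        self.log_shape = nn.Parameter(torch.zeros(n_experts))
        self.log_scale = nn.Parameter(torch.zeros(n_experts))
        # Gating network: instance features -> weights over the experts.
        self.gate = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                  nn.Linear(32, n_experts))

    def survival(self, x, t):
        """S(t | x) = sum_k w_k(x) * exp(-(t / scale_k) ** shape_k)."""
        w = torch.softmax(self.gate(x), dim=-1)                   # (B, K)
        shape, scale = self.log_shape.exp(), self.log_scale.exp()
        expert_surv = torch.exp(-(t[:, None] / scale) ** shape)   # (B, K)
        return (w * expert_surv).sum(dim=-1)

    def cluster(self, x):
        """Interpretable subgrouping: index of each instance's dominant expert."""
        return torch.softmax(self.gate(x), dim=-1).argmax(dim=-1)
```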

    Hybrid Graph Neural Networks for Crowd Counting

    Crowd counting is an important yet challenging task due to large variations in scale and density. Recent investigations have shown that distilling rich relations among multi-scale features and exploiting useful information from the auxiliary task, i.e., localization, are vital for this task. Nevertheless, how to comprehensively leverage these relations within a unified network architecture remains a challenging problem. In this paper, we present a novel network structure called Hybrid Graph Neural Network (HyGnn), which addresses this problem by interweaving the multi-scale features for crowd density and its auxiliary task (localization) and performing joint reasoning over a graph. Specifically, HyGnn integrates a hybrid graph that jointly represents the task-specific feature maps of different scales as nodes and two types of relations as edges: (i) multi-scale relations for capturing feature dependencies across scales and (ii) mutually beneficial relations that build bridges for cooperation between counting and localization. Through message passing, HyGnn can thus distill rich relations between the nodes to obtain more powerful representations, leading to robust and accurate results. HyGnn performs strongly on four challenging datasets: ShanghaiTech Part A, ShanghaiTech Part B, UCF_CC_50, and UCF_QNRF, outperforming state-of-the-art approaches by a large margin. Comment: To appear in AAAI 202
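    To make the message-passing idea concrete, the sketch below performs one round of updates over a small set of node vectors (one per scale- and task-specific feature map), with a single adjacency matrix standing in for both edge types. The dimensions, GRU-style update, and edge construction are illustrative assumptions rather than HyGnn's actual architecture.

```python
import torch
import torch.nn as nn

class HybridMessagePassingSketch(nn.Module):
    """One round of message passing between multi-scale / multi-task nodes."""

    def __init__(self, dim=64):
        super().__init__()
        self.message = nn.Linear(2 * dim, dim)  # message from node j to node i
        self.update = nn.GRUCell(dim, dim)      # update node i from its aggregated messages

    def forward(self, nodes, adjacency):
        # nodes: (N, dim), one vector per task-specific feature map at one scale
        # adjacency: (N, N) 0/1 mask mixing cross-scale and counting<->localization edges
        n = nodes.size(0)
        dst = nodes[:, None, :].expand(n, n, -1)           # receiving node i
        src = nodes[None, :, :].expand(n, n, -1)           # neighbor node j
        msgs = self.message(torch.cat([dst, src], dim=-1)) # pairwise messages
        agg = (adjacency[..., None] * msgs).sum(dim=1)     # aggregate over neighbors
        return self.update(agg, nodes)                     # refined node representations
```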